
🐛 Set nofile ulimit for loadbalancer container #7344

Conversation

@killianmuldoon (Contributor):

Hardcode a nofile ulimit when running the load balancer container. I set the limit to quite a high number, but I don't think it needs to be that high for CAPD clusters.

This change is intended to solve docker-library/haproxy#194, which impacts CAPD on Fedora and possibly other Linux distros. In the future, the addition of a Resources setting to the run container config structs could be used to set other kinds of limits, e.g. Memory and CPU.
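For illustration, a minimal sketch of what attaching a hardcoded nofile ulimit to the container config can look like with Docker's Go API types. The helper name and the 65536 value are assumptions for the example, not necessarily what this PR uses:

```go
package docker

import (
	dockercontainer "github.com/docker/docker/api/types/container"
	"github.com/docker/go-units"
)

// loadBalancerResources returns the resource settings applied when running the
// load balancer container. The nofile value is illustrative only.
func loadBalancerResources() dockercontainer.Resources {
	return dockercontainer.Resources{
		Ulimits: []*units.Ulimit{
			{
				Name: "nofile", // maximum number of open file descriptors
				Soft: 65536,
				Hard: 65536,
			},
		},
	}
}
```

A value like this would then be carried through the run-container input shown in the diff below.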

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign timothysc for approval by writing /assign @timothysc in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA) on Oct 5, 2022
@k8s-ci-robot added the size/S label (Denotes a PR that changes 10-29 lines, ignoring generated files) on Oct 5, 2022
@@ -95,6 +97,8 @@ type RunContainerInput struct {
 	PortMappings []PortMapping
 	// IPFamily is the IP version to use.
 	IPFamily clusterv1.ClusterIPFamily
+	// Resource limits and settings for the container.
+	Resources dockercontainer.Resources
@killianmuldoon (Contributor, Author):

This could also be exposed at the DockerCluster level for resource limits on the load balancer, but I wanted some feedback before making this a user-facing change.

@killianmuldoon force-pushed the capd/add-loadbalancer-resources branch from f51da8b to bfb3a3c on October 5, 2022 at 10:41
@killianmuldoon (Contributor, Author):

/hold

Need to be sure this doesn't impact functionality on other systems. It might be safer to set this ulimit at a low-to-middle level, e.g. 8000-20000, rather than a high level, in case some platform setups have upper limits.

@k8s-ci-robot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command) on Oct 5, 2022
@sbueringer mentioned this pull request on Oct 5, 2022
@randomvariable (Member) commented on Oct 5, 2022:

65536 is fairly conservative. The upper limit is based on the highest value of an unsigned int (65536), 10% of RAM in KB, and the value of the NR_FILE compilation variable, so 65536 will be the lowest number.

That said, on Fedora you're likely to want to set a systemd limit for Docker anyway, as you're going to run into other issues. I've just rebuilt my desktop and need to check how I did that.

EDIT: I think this is actually related to cgroupsv2, so we will see it more as more things default to cgroupsv2

@killianmuldoon (Contributor, Author):

EDIT: I think this is actually related to cgroupsv2, so we will see it more as more things default to cgroupsv2

I'm not sure - the issue only started impacting me in recent months and seemed to be dependent on Docker / containerd version. Maybe it's linked to a config there.

@sbueringer (Member):

65536 is fairly conservative. The upper limit is based on the highest value of an unsigned int (65536), 10% of RAM in KB, and the value of the NR_FILE compilation variable, so 65536 will be the lowest number.

But I wonder if it's enough, although 65536 open files sounds like a lot for a haproxy in a CAPD cluster (I have no idea how much it usually uses, though; is there an easy way to check that?).
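(One rough way to check is to read procfs for the haproxy process; a sketch, with the PID lookup left as an assumption about how the process is located, e.g. PID 1 as seen from inside the load balancer container:)

```go
package main

import (
	"fmt"
	"os"
)

// Sketch: print how many file descriptors a process has open and its limits,
// by reading procfs (Linux only). Pass the haproxy PID as the first argument.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: fdcheck <pid>")
		os.Exit(1)
	}
	pid := os.Args[1]

	fds, err := os.ReadDir(fmt.Sprintf("/proc/%s/fd", pid))
	if err != nil {
		panic(err)
	}
	fmt.Printf("open file descriptors: %d\n", len(fds))

	limits, err := os.ReadFile(fmt.Sprintf("/proc/%s/limits", pid))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", limits)
}
```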

@sbueringer (Member):

@killianmuldoon
Considering: haproxy/haproxy#1751 (comment)

What about:

  1. Bumping our kindest/haproxy image from v20210715-a6da3463 to v20220607-9a4d8d2a (both are using haproxy 2.2.9)?
  2. Trying to get haproxy bumped to a recent version in kind? (or maybe as a first test, trying to build the image with a new haproxy version and testing if the issue would disappear)

@k8s-triage-robot:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale) on Jan 4, 2023
@k8s-ci-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jan 20, 2023
@k8s-ci-robot (Contributor):

@killianmuldoon: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot:

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Feb 19, 2023
@jimmidyson (Member):

I've tested with HAProxy 2.6 (HAProxy version 2.6.9-1~bpo11+1 2023/02/15 - https://haproxy.org/ to be precise) and I still hit this issue locally.

I have built CAPD with this patch, using the existing HAProxy image, and it works like a charm.

Is there anything I can do to help get this merged?

@killianmuldoon (Contributor, Author):

Is there anything I can do to help get this merged?

I need time to get back to this 🙃. For now I've been using the workaround of setting the ulimits globally in my Docker config while we figure out the right way to do this. Currently I have the following in my systemd unit file:

ExecStart=/usr/bin/dockerd --default-ulimit nofile=65883:65883 -H fd:// --containerd=/run/containerd/containerd.sock

The --default-ulimit arg takes care of this problem, but I would like to configure haproxy correctly so it's not an issue for other users.
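(For anyone copying this, a systemd drop-in is probably the tidier way to carry that flag rather than editing the unit file directly; a sketch, with the file path purely illustrative and the nofile value taken from the line above:)

```ini
# /etc/systemd/system/docker.service.d/ulimit.conf (illustrative path)
[Service]
# The empty ExecStart= clears the packaged command before replacing it.
ExecStart=
ExecStart=/usr/bin/dockerd --default-ulimit nofile=65883:65883 -H fd:// --containerd=/run/containerd/containerd.sock
```

Then reload with systemctl daemon-reload and restart Docker.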

BTW - if you're interested in picking this up I'm happy to hand it over!

@randomvariable (Member):

The last time we discussed this, I was OK with the upper number. Especially on macOS, the worst thing that happens is that you have to restart your Docker Engine. And this is all local execution, so there's no remote attack vector.

@dlipovetsky (Contributor) commented on Mar 7, 2023:

I described the root cause in kubernetes-sigs/kind#2954 (comment), and fixed it in kubernetes-sigs/kind#3115.

In light of that, I don't think we should change the file descriptor limit here, unless we have other reasons to do so.

I presume the next kind release will have the fix above. For now, I use a workaround similar to what @killianmuldoon described in #7344 (comment).

@killianmuldoon (Contributor, Author):

/close

This has been partially fixed by #8246. The current state is that the haproxy image will init and will likely crash on startup, but once CAPD writes the config it will be stable.

The final fix will be to pick up a new kindest/haproxy image once one is published, or to move to haproxytech/haproxy-alpine, either of which includes maxconn in its haproxy.cfg. I'll open an issue to track that update.
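(For context, the relevant piece in those images is an explicit global maxconn in haproxy.cfg; with it set, haproxy no longer tries to derive an enormous maxconn from a huge file descriptor limit. The value below is illustrative only, not what either image ships:)

```
# Illustrative haproxy.cfg excerpt; the value is an assumption, not an image default.
global
  maxconn 4096
```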

@k8s-ci-robot (Contributor):

@killianmuldoon: Closed this PR.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
